We investigate the learning rate of multiple kernel learning (MKL) with elastic-net regularization, which combines an $\ell_1$-regularizer for inducing sparsity with an $\ell_2$-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relatively small, and prove that elastic-net MKL achieves the minimax learning rate on the $\ell_2$-mixed-norm ball. Our bound is sharper than previously known convergence rates, and has the property that the smoother the truth is, the faster the convergence rate becomes.
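For concreteness, the elastic-net regularization referred to above can be sketched as follows (a schematic form, not the paper's exact objective; the regularization parameters $\lambda_1, \lambda_2$ and the per-kernel RKHS norms $\|f_m\|_{\mathcal{H}_m}$ are illustrative notation):

```latex
% Schematic elastic-net MKL objective over M candidate kernels:
% empirical loss plus an l1-type term (sparsity across kernels)
% and an l2-type term (smoothness within each kernel component).
\hat{f} = \operatorname*{argmin}_{f = \sum_{m=1}^{M} f_m,\; f_m \in \mathcal{H}_m}
  \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(y_i, f(x_i)\bigr)
  + \lambda_1 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}
  + \lambda_2 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}^2
```

The $\ell_1$-type sum of norms drives whole kernel components to zero (matching the sparse setting described above), while the squared $\ell_2$-type term regularizes the surviving components.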